LIFT: Multi-Label Learning with Label-Specific Features
Authors
Abstract
Similar Resources
Multi-Label Learning with Weak Label
Multi-label learning deals with data associated with multiple labels simultaneously. Previous work on multi-label learning assumes that the "full" label set associated with each training instance is given by users. In many applications, however, obtaining the full label set for each instance is difficult and only a "partial" set of labels is available. In such cases, the appeara...
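As a minimal sketch of the weak-label setting described above (not code from the cited paper), the toy NumPy snippet below builds a hypothetical label matrix in which 1 means "tagged by the user" and 0 is ambiguous, i.e. possibly an unobserved positive rather than a true negative; all names and data are illustrative.

import numpy as np

rng = np.random.default_rng(0)
n_instances, n_labels = 5, 4

# Hypothetical "full" label matrix that the user never fully provides.
Y_full = (rng.random((n_instances, n_labels)) < 0.5).astype(int)
# Which entries the user actually tagged (a "partial" view of the labels).
observed = rng.random((n_instances, n_labels)) < 0.6
# Untagged positives silently become 0 in the observed matrix.
Y_weak = Y_full * observed

# In the weak-label setting, a 0 in Y_weak may be a true negative or an
# unobserved positive, so treating every 0 as negative injects label noise.
print("full labels:\n", Y_full)
print("observed (weak) labels:\n", Y_weak)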
Multi-Label Learning with Label Enhancement
Multi-label learning deals with training instances associated with multiple labels. Many common multi-label algorithms treat each label in a crisp manner, as either relevant or irrelevant to an instance; such a label can be called a logical label. In contrast, we assume that there is a vector of numerical labels behind each multi-label instance, and the numerical label can be treated a...
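As a minimal, hedged illustration of the logical-versus-numerical-label distinction above, the snippet below smooths crisp 0/1 labels with instance similarity; this heuristic is an assumption made for the example only, not the enhancement algorithm of the cited paper.

import numpy as np

# Toy feature vectors and crisp logical labels (1 = relevant, 0 = irrelevant).
X = np.array([[0.0, 0.0],
              [0.1, 0.0],
              [1.0, 1.0],
              [0.9, 1.1]])
Y_logical = np.array([[1, 0],
                      [1, 0],
                      [0, 1],
                      [1, 1]], dtype=float)

# Row-normalised RBF similarity between instances.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 0.5)
W /= W.sum(axis=1, keepdims=True)

# Numerical labels: each instance borrows label strength from similar
# instances, giving graded degrees in [0, 1] instead of crisp 0/1 values.
Y_numerical = W @ Y_logical
print(np.round(Y_numerical, 2))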
Multi-Instance Multi-Label Learning with Weak Label
Multi-Instance Multi-Label learning (MIML) deals with data objects that are represented by a bag of instances and associated with a set of class labels simultaneously. Previous studies typically assume that for every training example, all positive labels are tagged whereas the untagged labels are all negative. In many real applications such as image annotation, however, the learning problem oft...
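A minimal sketch of the MIML representation described above, using a hypothetical MIMLExample container (not from the cited paper): each example is a bag of instance vectors plus the set of labels a user actually tagged, with untagged labels treated as unknown rather than negative.

from dataclasses import dataclass
import numpy as np

@dataclass
class MIMLExample:
    bag: np.ndarray     # shape (n_instances_in_bag, n_features)
    tagged_labels: set  # labels the user provided (possibly incomplete)

example = MIMLExample(
    bag=np.array([[0.2, 0.7],   # e.g. one image region per row
                  [0.9, 0.1],
                  [0.4, 0.4]]),
    tagged_labels={"sky", "tree"},  # "road" may be present but simply untagged
)

all_labels = ["sky", "tree", "road", "car"]
# 1 = tagged, 0 = unknown (not necessarily negative) under the weak-label view.
y_observed = np.array([1 if lbl in example.tagged_labels else 0 for lbl in all_labels])
print(y_observed)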
Exploiting Label Relationship in Multi-Label Learning
In many real data mining tasks, one data object is often associated with multiple class labels simultaneously; for example, a document may belong to multiple topics, an image can be tagged with multiple terms, etc. Multi-label learning focuses on such problems, and it is well accepted that exploiting the relationships among labels is crucial; indeed, this is the essential difference betwee...
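One simple, commonly used way to expose label relationships is a label co-occurrence matrix; the toy snippet below is an illustration of that idea only, not the method of the cited paper, and its labels and data are made up.

import numpy as np

labels = ["politics", "economy", "sports", "health"]
Y = np.array([[1, 1, 0, 0],   # document 1: politics + economy
              [1, 0, 0, 0],
              [0, 0, 1, 1],   # document 3: sports + health
              [0, 1, 0, 1],
              [1, 1, 0, 0]])

# cooc[i, j] = number of documents tagged with both label i and label j.
cooc = Y.T @ Y
np.fill_diagonal(cooc, 0)
print(cooc)

A learner can use such statistics, for example, to encourage correlated labels to be predicted jointly rather than independently.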
Supplementary: Extreme Multi-label Learning with Label Features for Warm-start Tagging, Ranking & Recommendation
Section 1 presents the pseudocode for the SwiftXML training and prediction algorithms. Section 2 reports the complete set of experimental results comparing SwiftXML to various baselines in terms of both propensity-scored precision (PSP1, PSP3, PSP5) and standard precision (P1, P3, P5). Section 3 shows the derivations for individual steps of the alternating minimization algorithm used for node par...
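For reference, the snippet below sketches the two metric families mentioned there, standard precision@k and propensity-scored precision@k, for a single test point; the label propensities are made-up values and the normalisation follows one common convention, so treat it as an illustration rather than the paper's exact evaluation code.

import numpy as np

def precision_at_k(scores, y_true, k):
    # Fraction of the k highest-scoring labels that are truly relevant.
    top = np.argsort(-scores)[:k]
    return y_true[top].sum() / k

def psp_at_k(scores, y_true, propensity, k):
    # Rare (low-propensity) labels are up-weighted, so tail labels count more.
    top = np.argsort(-scores)[:k]
    return (y_true[top] / propensity[top]).sum() / k

scores = np.array([0.9, 0.2, 0.7, 0.4, 0.1])   # predicted relevance scores
y_true = np.array([1,   0,   1,   0,   1  ])   # ground-truth relevance
prop   = np.array([0.8, 0.5, 0.3, 0.6, 0.05])  # hypothetical label propensities

print("P@3   =", precision_at_k(scores, y_true, 3))
print("PSP@3 =", psp_at_k(scores, y_true, prop, 3))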
Journal
Journal title: IEEE Transactions on Pattern Analysis and Machine Intelligence
Year: 2015
ISSN: 0162-8828, 2160-9292, 1939-3539
DOI: 10.1109/tpami.2014.2339815